Benjamin Dieudonné obtained his doctorate with the thesis 'Binaural hearing
with a cochlear implant and a hearing aid'
We are happy to provide a short summary of his dissertation, followed by the English abstract.
Summary
Everyone knows how difficult it can be to understand speech in noisy environments such as a pub. In such situations we make clever use of our two ears: by (unconsciously) focusing on small differences in sound between the ears, we can work out where different sounds come from and keep them apart more easily. For example, a sound coming from the left arrives slightly louder and slightly earlier (less than a millisecond!) at your left ear than at your right ear.
Hearing with two ears is called "binaural hearing". Unfortunately, binaural hearing is a difficult and sometimes impossible task for people with hearing loss, even with the latest hearing technology. As a result, they often become socially isolated, because it is hard for them to take part in conversations. Binaural hearing is particularly difficult for people with a cochlear implant in one ear (an electronic device, surgically implanted, that electrically stimulates the auditory nerve) and a hearing aid in the other ear (a small earpiece that amplifies sounds that are too quiet). Nevertheless, this combination is often the best option to restore hearing for people with severe hearing loss.
Although binaural hearing has been described in the scientific literature for decades, there is still confusion about the precise mechanisms behind it. In this doctoral project, we proposed a new model to better understand the mechanisms behind binaural hearing. In addition, we developed a new sound processing strategy to improve binaural hearing in hearing-impaired listeners. In both parts of this doctorate, we focused primarily on people with a cochlear implant in one ear and a hearing aid in the other.
Abstract
Cochlear implants (CIs) can restore hearing in deaf people through electrical stimulation of the auditory nerve. Worldwide, more than 400 000 people have received a cochlear implant. As a result of the great progress in CI technology over the last decades, the average cochlear implant user now understands speech better than many (regular) hearing aid users with severe to profound hearing loss. Therefore – although CIs were originally designed for the deaf – there is a steeply growing population of people with a cochlear implant who have some residual hearing. When their non-implanted ear is fitted with a hearing aid, this is called bimodal stimulation.
An important advantage of a bimodal configuration is that it enables hearing with two ears. Normal-hearing listeners can compare information between the two ears to localize and differentiate sound sources, yielding great advantages in speech understanding in noisy situations; this is called binaural hearing. The auditory system uses so-called interaural level differences (ILDs) and interaural time differences (ITDs) to determine where a sound comes from: for example, a sound coming from the left arrives louder and earlier at the left ear than at the right ear. Unfortunately, bimodal listeners do not experience these “binaural benefits” as normal-hearing listeners do. ITDs are practically imperceptible, among other reasons because of the sound distortions introduced by the CI processing.
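The magnitude of the ITD cue can be made concrete with a standard spherical-head approximation (Woodworth's formula). This is a textbook illustration, not a model used in the thesis itself; the head radius and speed of sound are typical assumed values:

```python
import math

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """Approximate interaural time difference (seconds) predicted by
    Woodworth's spherical-head model, for a source at the given azimuth
    (0 deg = straight ahead, 90 deg = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return (head_radius / c) * (theta + math.sin(theta))

# A source directly to the side (90 deg) gives an ITD of roughly
# 0.66 ms -- indeed less than a millisecond, as noted above.
print(woodworth_itd(90.0))
```

Even this maximal ITD is under a millisecond, which illustrates how fine the timing differences are that the binaural system exploits.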
Depending on their residual hearing in the non-implanted ear, ILDs are often almost imperceptible as well: many bimodal listeners have little to no residual hearing in the frequency region where ILDs are largest, that is, in the high frequencies. In the first part of this thesis, we aimed for a better understanding of the binaural effects involved in speech perception by normal-hearing and bimodal listeners. We established a theoretical framework to unambiguously define and relate often-used experimental measures of binaural hearing, such as spatial release from masking: the benefit in speech understanding due to spatial separation of speech and noise (compared with spatially collocated speech and noise). The framework allowed us to characterize spatial release from masking as a simple sum of head shadow – a monaural benefit due to an increase in signal-to-noise ratio at the ear farthest from the noise – and binaural contrast – a binaural benefit due to the processing of spatial information. For normal-hearing listeners, we found that head shadow made the largest contribution of the two terms.
Moreover, we found that binaural contrast could only be measured when ITDs were perceivable; ILDs never contributed to spatial release from masking. For bimodal listeners, we found that spatial release from masking could be explained entirely in terms of monaural changes in signal-to-noise ratio; that is, there was never a binaural contrast benefit – neither in the literature nor in our own data with simulated bimodal listeners. Although the literature often speculates about binaural cue processing when discussing speech understanding in noisy environments, we found no evidence that such processing is even possible in bimodal listeners.
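The decomposition described above can be sketched as simple arithmetic on speech reception thresholds (SRTs). The threshold values below are hypothetical, chosen only to illustrate how the two terms sum to the total benefit; they are not data from the thesis:

```python
# Hypothetical speech reception thresholds in dB SNR (lower = better).
srt_colocated = -2.0   # speech and noise both in front
srt_separated = -8.0   # noise moved to the side, both ears listening
srt_monaural = -6.5    # ear farthest from the noise, listening alone

# Total spatial release from masking: benefit of separating the sources.
srm = srt_colocated - srt_separated            # 6.0 dB

# Head shadow: the purely monaural part, from the improved SNR
# at the ear farthest from the noise.
head_shadow = srt_colocated - srt_monaural     # 4.5 dB

# Binaural contrast: whatever benefit remains once the monaural
# SNR change is accounted for.
binaural_contrast = srm - head_shadow          # 1.5 dB
```

For bimodal listeners, the finding above corresponds to `binaural_contrast` being zero: the whole release from masking is accounted for by the monaural head shadow term.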
In the second part of this thesis, we propose a new binaural cue enhancement strategy for bimodal listeners: head shadow enhancement. The method introduces low-frequency ILDs by supplying each ear with a fixed beamformer that attenuates sounds from the contralateral side – similar to what the acoustic head shadow does in the high frequencies. In contrast to previously reported methods to introduce low-frequency ILDs, head shadow enhancement does not require direction-of-arrival estimation, nor does it result in considerable sound distortion, and it naturally handles multiple sound sources. Moreover, its low computational complexity makes the method promising for application in clinical devices. In an experiment with simulated bimodal listeners, head shadow enhancement yielded large improvements in localization performance, as well as improvements in speech understanding in situations where a head shadow benefit would be expected (that is, with spatially separated speech and noise). Given these promising results, we developed a real-time implementation of head shadow enhancement and performed a free-field localization experiment with real bimodal listeners. We found that head shadow enhancement improved localization performance in the participant with poor baseline localization performance (1 of 3 participants), while it had no effect on the participants with fairly good baseline performance (2 of 3 participants).
Moreover, the benefit remained when head shadow enhancement was superimposed on slight frontal microphone directionality. Altogether, this project led to (1) a better understanding of binaural speech perception in normal-hearing and bimodal listeners, and (2) a novel sound processing algorithm that is a promising candidate for implementation in clinical devices to improve binaural hearing in bimodal listeners.
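The core idea of head shadow enhancement – a fixed beamformer per ear, with its null toward the contralateral side, so that an artificial level difference appears – can be illustrated with an idealized first-order cardioid directivity pattern. This is a deliberate simplification for illustration; the actual fixed beamformer designs used in the thesis are not reproduced here:

```python
import math

def cardioid_gain(source_deg, look_deg):
    """Amplitude gain of an idealized first-order cardioid beamformer:
    unity gain toward look_deg, a null on the opposite side."""
    theta = math.radians(source_deg - look_deg)
    return 0.5 * (1.0 + math.cos(theta))

# Left-ear device points left (-90 deg), right-ear device points right
# (+90 deg). A source at -60 deg (front-left) is passed almost unattenuated
# by the left device but strongly attenuated by the right one, creating an
# artificial interaural level difference -- mimicking the acoustic head
# shadow, but at low frequencies as well.
left = cardioid_gain(-60, -90)    # ~0.93
right = cardioid_gain(-60, +90)   # ~0.07
ild_db = 20.0 * math.log10(left / right)
```

Because the pattern is fixed, no direction-of-arrival estimate is needed, and several simultaneous sources are each attenuated according to their own direction – consistent with the properties claimed for the method above.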